
    Spatial audio in small display screen devices

    Our work addresses the problem of (visual) clutter in mobile device interfaces. The solution we propose involves the translation of a technique from the graphical to the audio domain for exploiting space in information representation. This article presents an illustrative example in the form of a spatialised audio progress bar. In usability tests, participants performed background monitoring tasks significantly more accurately using this spatialised audio progress bar than using a conventional visual one. Moreover, their performance in a simultaneously running, visually demanding foreground task was significantly improved in the eyes-free monitoring condition. These results have important implications for the design of multi-tasking interfaces for mobile devices.
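
    To make the idea concrete, the sketch below maps a task's progress value onto a stereo position, so a background process can be monitored eyes-free by where its audio cue sits in space. This is a minimal illustration using constant-power panning; the abstract does not specify the paper's actual spatialisation technique, and all names in the snippet are hypothetical.

    import numpy as np

    def progress_to_stereo(progress: float, tone: np.ndarray) -> np.ndarray:
        # Pan a mono cue from hard left (progress = 0.0) to hard right
        # (progress = 1.0) using constant-power (sin/cos) panning, so the
        # perceived loudness stays roughly even across the stereo field.
        angle = progress * (np.pi / 2)
        left = np.cos(angle) * tone
        right = np.sin(angle) * tone
        return np.stack([left, right], axis=-1)  # shape: (samples, 2)

    # Example: a 100 ms, 440 Hz cue heard 30% of the way through a task.
    sr = 44100
    t = np.linspace(0.0, 0.1, int(sr * 0.1), endpoint=False)
    tone = 0.2 * np.sin(2 * np.pi * 440.0 * t)
    stereo_cue = progress_to_stereo(0.3, tone)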

    Latency Associated Peptide Has In Vitro and In Vivo Immune Effects Independent of TGF-β1

    Latency Associated Peptide (LAP) binds TGF-β1, forming a latent complex. Currently, LAP is presumed to function only as a sequestering agent for active TGF-β1. Previous work shows that LAP can induce epithelial cell migration, but effects on leukocytes have not been reported. Because of the multiplicity of immunologic processes in which TGF-β1 plays a role, we hypothesized that LAP could function independently to modulate immune responses. In separate experiments we found that LAP promoted chemotaxis of human monocytes and blocked inflammation in vivo in a murine model of the delayed-type hypersensitivity response (DTHR). These effects did not involve TGF-β1 activity. Further studies revealed that disruption of specific LAP-thrombospondin-1 (TSP-1) interactions prevented LAP-induced responses. The effect of LAP on DTH inhibition depended on IL-10. These data support a novel role for LAP in regulating monocyte trafficking and immune modulation.

    Do I Have My Attention? Speed of Processing Advantages for the Self-Face Are Not Driven by Automatic Attention Capture

    We respond more quickly to our own face than to other faces, but there is debate over whether this is connected to attention-grabbing properties of the self-face. In two experiments, we investigate whether the self-face selectively captures attention, and the attentional conditions under which this might occur. In both experiments, we examined whether different types of face (self, friend, stranger) provide differential levels of distraction when processing self, friend and stranger names. In Experiment 1, an image of a distractor face appeared centrally – inside the focus of attention – behind a target name, with the faces either upright or inverted. In Experiment 2, distractor faces appeared peripherally – outside the focus of attention – in the left or right visual field, or bilaterally. In both experiments, self-name recognition was faster than other-name recognition, suggesting a self-referential processing advantage. The presence of the self-face did not cause more distraction in the naming task than other types of face, either when presented inside (Experiment 1) or outside (Experiment 2) the focus of attention. Distractor faces had different effects across the two experiments: when presented inside the focus of attention (Experiment 1), self and friend images facilitated self and friend naming, respectively. This was not true for stranger stimuli, suggesting that faces must be robustly represented to facilitate name recognition. When presented outside the focus of attention (Experiment 2), no facilitation occurred. Instead, we report an interesting distraction effect caused by friend faces when processing strangers’ names. We interpret this as a “social importance” effect, whereby we may be tuned to pick out and pay attention to familiar friend faces in a crowd. We conclude that any speed of processing advantages observed in the self-face processing literature are not driven by automatic attention capture.

    Task Attention Facilitates Learning of Task-Irrelevant Stimuli

    Attention plays a fundamental role in visual learning and memory. One highly established principle of visual attention is that the harder a central task is, the more attentional resources are used to perform the task and the less attention is allocated to peripheral processing, because of limited attentional capacity. Here we show that this principle holds true in a dual-task setting but not in a paradigm of task-irrelevant perceptual learning. In Experiment 1, eight participants were asked to identify either bright or dim number targets at the screen center and to remember concurrently presented scene backgrounds. Their recognition performance for scenes paired with dim/hard targets was worse than for scenes paired with bright/easy targets. In Experiment 2, eight participants were asked to identify either bright or dim letter targets at the screen center while task-irrelevant coherent motion was concurrently presented in the background. After five days of training on letter identification, participants’ motion sensitivity improved for the direction paired with hard/dim targets but not for the direction paired with easy/bright targets. Taken together, these results suggest that task-irrelevant stimuli are not subject to the attentional control mechanisms that task-relevant stimuli abide by.

    Speech Cues Contribute to Audiovisual Spatial Integration

    Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately, the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist’s illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between the ventral ‘what’ and dorsal ‘where’ pathways.

    Finding Your Mate at a Cocktail Party: Frequency Separation Promotes Auditory Stream Segregation of Concurrent Voices in Multi-Species Frog Choruses

    Vocal communication in crowded social environments is a difficult problem for both humans and nonhuman animals. Yet many important social behaviors require listeners to detect, recognize, and discriminate among signals in a complex acoustic milieu comprising the overlapping signals of multiple individuals, often of multiple species. Humans exploit a relatively small number of acoustic cues to segregate overlapping voices (as well as other mixtures of concurrent sounds, like polyphonic music). By comparison, we know little about how nonhuman animals are adapted to solve similar communication problems. One important cue enabling source segregation in human speech communication is frequency separation between concurrent voices: differences in frequency promote perceptual segregation of overlapping voices into separate “auditory streams” that can be followed through time. In this study, we show that frequency separation (ΔF) also enables frogs to segregate concurrent vocalizations, such as those routinely encountered in mixed-species breeding choruses. We presented female gray treefrogs (Hyla chrysoscelis) with a pulsed target signal (simulating an attractive conspecific call) in the presence of a continuous stream of distractor pulses (simulating an overlapping, unattractive heterospecific call). When the ΔF between target and distractor was small (e.g., ≤3 semitones), females exhibited low levels of responsiveness, indicating a failure to recognize the target as an attractive signal when the distractor had a similar frequency. Subjects became increasingly responsive to the target, as indicated by shorter latencies for phonotaxis, as the ΔF between target and distractor increased (e.g., ΔF = 6–12 semitones). These results support the conclusion that gray treefrogs, like humans, can exploit frequency separation as a perceptual cue to segregate concurrent voices in noisy social environments. The ability of these frogs to segregate concurrent voices based on frequency separation may involve ancient hearing mechanisms for source segregation shared with humans and other vertebrates.
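
    The ΔF cue itself is straightforward to compute: a semitone is a factor of 2^(1/12) in frequency, so the separation between two concurrent voices is 12·log2 of their frequency ratio. A minimal sketch follows; the example frequencies are illustrative, not the stimulus values used in the study.

    import math

    def semitone_separation(f1_hz: float, f2_hz: float) -> float:
        # Frequency separation in semitones: |12 * log2(f1 / f2)|.
        # 12 semitones = one octave (a doubling of frequency).
        return abs(12.0 * math.log2(f1_hz / f2_hz))

    # Hypothetical example: a target pulse one octave above a distractor.
    print(semitone_separation(2500.0, 1250.0))  # -> 12.0 semitones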

    Evaluations on underdetermined blind source separation in adverse environments using time-frequency masking

    The successful implementation of speech processing systems in the real world depends on their ability to handle adverse acoustic conditions with undesirable factors such as room reverberation and background noise. In this study, an extension to the established multiple sensors degenerate unmixing estimation technique (MENUET) algorithm for blind source separation is proposed, based on fuzzy c-means clustering, to yield improvements in separation ability for underdetermined situations using a nonlinear microphone array. Rather than testing blind source separation ability solely under reverberant conditions, this paper extends the evaluation to a variety of simulated and real-world noisy environments. The results showed encouraging separation ability and improved perceptual quality of the separated sources under such adverse conditions. Not only does this establish the proposed methodology as a credible improvement to the system, but it also implies further applicability in areas such as noise suppression in adverse acoustic environments.
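
    For readers unfamiliar with time-frequency masking, the sketch below separates two sources from a stereo mixture by assigning each STFT bin to whichever channel dominates it. This is a deliberately simplified stand-in for the paper's approach: MENUET clusters level- and phase-difference features across a microphone array (extended here with fuzzy c-means), whereas this toy version applies a hard binary mask to a two-channel mixture.

    import numpy as np
    from scipy.signal import stft, istft

    def binary_mask_separation(mix_left, mix_right, fs, nperseg=1024):
        # Transform both mixture channels into the time-frequency domain.
        _, _, L = stft(mix_left, fs=fs, nperseg=nperseg)
        _, _, R = stft(mix_right, fs=fs, nperseg=nperseg)
        # Hard assignment: each time-frequency bin goes to the channel
        # with the larger magnitude (a crude level-difference cue).
        mask = np.abs(L) > np.abs(R)
        _, src1 = istft(np.where(mask, L, 0.0), fs=fs, nperseg=nperseg)
        _, src2 = istft(np.where(~mask, R, 0.0), fs=fs, nperseg=nperseg)
        return src1, src2

    Like other masking-based methods, this sketch relies on the assumption that at most one source dominates any given time-frequency bin, which is why performance degrades as reverberation and noise smear energy across bins.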

    Individual working memory capacity is uniquely correlated with feature-based attention when combined with spatial attention

    A growing literature suggests that working memory (WM) and attention are closely related constructs. Both involve the selection of task-relevant information, and both are characterized by capacity limits. Furthermore, studies using a variety of methodological approaches have demonstrated convergent working memory and attention-related processing at the individual, neural, and behavioral levels. Given the varieties of both constructs, the specific kinds of attention and WM involved must be considered. We find that individuals’ working memory capacity (WMC) uniquely interacts with feature-based attention when combined with spatial attention in a cuing paradigm (Posner, 1980). Our findings suggest a positive correlation between WM and feature-based attention only within the spotlight of spatial attention. This finding lends support to the controlled attention view of working memory by demonstrating that integrated feature-based expectancies are uniquely correlated with individual performance on a working memory task.